Cognitive, Affective, & Behavioral Neuroscience
Springer Science and Business Media LLC
All preprints, ranked by how well they match Cognitive, Affective, & Behavioral Neuroscience's content profile, based on 25 papers previously published here. The average preprint has a 0.01% match score for this journal, so anything above that is already an above-average fit. Older preprints may already have been published elsewhere.
Feldman, R. L.; Quale, M.; Etzel, J. A.; Braver, T. S.
Recent work suggests a preferential relationship between working memory capacity (WMC) and proactive control, yet the neural mechanisms that support this relationship are still not well understood. We directly addressed this question by leveraging the Dual Mechanisms of Cognitive Control (DMCC) project, as it employed an fMRI design optimized to test for individual differences (sample N > 100), with task variants that independently assessed proactive and reactive control relative to baseline conditions. Behavioral analyses replicated prior work with the AX-CPT paradigm, in which a measure of target preparation based on contextual cues (the A-cue Bias index) was both reliably increased under task conditions encouraging proactive control and positively associated with WMC. Analyses of fMRI activity indicated that A-cue Bias was selectively linked to increased cue-related neural activity in left motor cortex (lMOT). Additionally, WMC was associated with increased cue-related activation in right dorsolateral prefrontal cortex (rDLPFC), even when statistically controlling for baseline and reactive conditions. The relationship between these two effects was supported by a latent path analysis, which suggested that the rDLPFC-lMOT circuit preferentially mediates the WMC-A-cue Bias relationship present under proactive task conditions. The results suggest this neural circuit may translate strategic task goals into active response preparation as a mechanism of proactive control. Individuals high in WMC may be better able to implement proactive task strategies when instructed via contextual cues. The sensitivity of the rDLPFC-lMOT circuit to individual differences suggests it as a potential target for cognitive enhancement.
Mayer, A. V.; Schroeder, A.; Stolz, D. S.; Czekalla, N.; Paulus, F. M.; Mueller-Pinzler, L.; Krach, S.; Kube, T.
Healthy individuals typically attribute successes to internal causes, such as their abilities, and failures to external factors, like bad luck. In contrast, individuals with depression and low self-esteem are more likely to attribute failures to internal causes and successes to external causes. At the same time, depression and low self-esteem are associated with negatively biased self-related learning and self-beliefs. Although causal attributions have been shown to influence belief formation and updating, the dynamic interaction between real-time attributions and self-related learning remains poorly understood. In this study, we used a validated self-related learning task to investigate how internal versus external attributions of performance feedback affect the formation of self-beliefs and how these processes relate to depressive symptoms and self-esteem. Drawing on a computational model that incorporates prediction error valence and causal attributions, we found that participants updated their self-beliefs less when feedback was attributed to external causes. Furthermore, individuals with higher levels of depression and lower self-esteem showed a stronger negativity bias in learning. Lower self-esteem was also linked to a reduced self-serving bias in attributions. These findings provide insight into the cognitive mechanisms that may contribute to the development and maintenance of negative self-beliefs commonly observed in depression.
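The attribution-weighted belief updating this abstract describes can be illustrated with a simple delta rule in which the prediction-error learning rate depends on feedback valence and external attributions discount the update. This is a minimal sketch with illustrative function and parameter names, not the authors' published model:

```python
# Sketch of attribution-weighted self-belief updating (illustrative only;
# names and the exact update rule are assumptions, not the paper's model).
def update_belief(belief, feedback, lr_pos, lr_neg, internal_attribution):
    """Update a self-belief from one round of performance feedback.

    belief, feedback: values on a common scale (e.g., 0-100).
    lr_pos / lr_neg: learning rates for positive / negative prediction
        errors; lr_neg > lr_pos yields the negativity bias in learning
        reported for higher depression and lower self-esteem.
    internal_attribution: weight in [0, 1]; external attributions (low
        values) shrink the update, matching the reduced updating the
        abstract reports for externally attributed feedback.
    """
    pe = feedback - belief                      # prediction error
    lr = lr_pos if pe >= 0 else lr_neg          # valence-dependent learning rate
    return belief + internal_attribution * lr * pe
```

For example, with a fully internal attribution a positive prediction error of 20 and lr_pos = 0.3 moves the belief by 6 points, while an external attribution of 0.2 moves it by only 1.2 points.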
Gheza, D.; Kool, W.; Pourtois, G.
When making decisions, humans aim to maximize rewards while minimizing costs. The exertion of mental or physical effort has been proposed to be one of those costs, translating into avoidance of behaviors carrying effort demands. This motivational framework also predicts that people should experience positive affect when anticipating demand that is subsequently avoided (i.e., a "relief effect"), but evidence for this prediction is scarce. Here, we follow up on a previous study (1) that provided some initial evidence that people evaluated outcomes more positively if they meant an additional demanding task could be avoided. However, the results from this study did not provide evidence that this effect was driven by effort avoidance. Here, we report two experiments that address this gap. Participants performed a gambling task, and if they did not receive a reward they had to perform an orthogonal effort task. Prior to the gamble, a cue indicated whether this effort task would be easy or hard. We probed hedonic responses to the reward-related feedback, as well as after the subsequent effort task feedback. Participants reported lower hedonic responses for no-reward outcomes when high vs. low effort was anticipated (and later exerted). They also reported higher hedonic responses for reward outcomes when high vs. low effort was anticipated (and avoided). Importantly, this relief effect was smaller in participants with high need for cognition. These results suggest that avoidance of high-effort tasks is rewarding, but that the size of this effect depends on the individual disposition to engage with and expend cognitive effort. They also raise the important question of whether this disposition alters the cost of effort per se, or rather offsets this cost during cost-benefit analyses.
Cheng, Z.; Ging-Jehli, N. R.; Tarlow, M.; Kim, J.; Chase, H. W.; Arora, M.; Bonar, L.; Stiffler, R.; Grattery, A.; Graur, S.; Frank, M. J.; Phillips, M. L.; Shenhav, A.
To behave adaptively, people need to integrate information about probabilistic outcomes and balance drives to approach positive outcomes and avoid negative outcomes. However, questions remain about how uncertainty in positive and negative outcomes influences approach-avoid decision-making dynamics. To fill this gap, we developed a novel Probabilistic Approach Avoidance Task (PAAT) and characterized behavior in this task using sequential sampling models. In this task, participants (Study 1: blinded mixed clinical sample N = 34; Study 2: online nonpsychiatric sample N = 58) made a series of choices between pairs of options, each consisting of variable probabilities of reaching a positive outcome (monetary reward) and of reaching a negative outcome (aversive image). Participants tended to choose options that maximized the likelihood of reward and minimized the likelihood of aversive outcomes. Moreover, the weights they placed on each of these differed for choices where these likelihoods were in opposition (i.e., the riskier option was also more rewarding; incongruent trials) relative to when these were aligned (congruent trials). Computational modeling revealed that the relative influence of rewarding and aversive outcomes on choice was captured by differences in the rate of decision-relevant information accumulation. These modeling results were validated with a series of model comparisons and posterior predictive checks, demonstrating that our sequential sampling models reliably captured our behavioral data. Together, these findings improve our understanding of the influence of motivational conflict, outcome type, and levels of uncertainty on approach-avoid decision-making.
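Sequential sampling models of the kind used here can be illustrated with a minimal drift diffusion simulation: evidence accumulates noisily toward one of two bounds, and the drift rate carries the decision-relevant information. Parameter names and defaults below are illustrative assumptions, not the paper's fitted specification:

```python
import random

def simulate_ddm(drift, threshold, ndt, dt=0.001, noise=1.0, rng=random):
    """Simulate one trial of a basic drift diffusion model (sketch only).

    Evidence starts at 0 and accumulates with rate `drift` plus Gaussian
    noise until it crosses +threshold (here labeled "approach") or
    -threshold ("avoid"). Returns (choice, reaction_time), where the
    reaction time includes a non-decision time `ndt`.
    """
    x, t = 0.0, 0.0
    sd = noise * dt ** 0.5                 # diffusion noise scales with sqrt(dt)
    while abs(x) < threshold:
        x += drift * dt + rng.gauss(0.0, sd)
        t += dt
    return ("approach" if x > 0 else "avoid", ndt + t)
```

With a strong positive drift the model tends to choose "approach" quickly; weakening or reversing the drift shifts both the choice proportions and the reaction-time distribution, which is how differences in accumulation rate can capture the relative pull of rewarding versus aversive outcomes.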
Prater Fahey, M.; Yee, D. M.; Leng, X.; Tarlow, M.; Shenhav, A.
It is well known that people will exert effort on a task if sufficiently motivated, but how they distribute these efforts across different strategies (e.g., efficiency vs. caution) remains uncertain. Past work has shown that people invest effort differently for potential positive outcomes (rewards) versus potential negative outcomes (penalties). However, this research failed to account for differences in the context in which negative outcomes motivate someone: either as punishment or as reinforcement. It is therefore unclear whether effort profiles differ as a function of outcome valence, motivational context, or both. Using computational modeling and our novel Multi-Incentive Control Task, we show that the influence of aversive outcomes on one's effort profile is entirely determined by their motivational context. Participants (N = 91) favored increased caution in response to larger penalties for incorrect responses, and favored increased efficiency in response to larger reinforcement for correct responses, whether positively or negatively incentivized. Statement of Relevance: People have to constantly decide how to allocate their mental effort, and in doing so can be motivated by both the positive outcomes that effort accrues and the negative outcomes that effort avoids. For example, someone might persist on a project for work in the hopes of being promoted or to avoid being reprimanded or even fired. Understanding how people weigh these different types of incentives is critical for understanding variability in human achievement as well as sources of motivational impairments (e.g., in major depression). We show that people not only consider both potential positive and negative outcomes when allocating mental effort, but that the profile of effort they engage under negative incentives differs depending on whether that outcome is contingent on sustaining good performance (negative reinforcement) or avoiding bad performance (punishment). Clarifying the motivational factors that determine effort exertion is an important step for understanding motivational impairments in psychopathology.
Mason, L.; Woelk, S.; Eldar, E.; Rutledge, R.
Background: Intuitively, emotional states guide not only the actions we take, but also our confidence in those actions. This sets the stage for subjective confidence about the best action to take to diverge from the actual likelihood of success and, clinically, may give rise to over-confidence and risky behaviours during episodes of elevated mood and the reverse during depressive episodes. Whilst computational models have been proposed to explain how emotional states recursively bias perception of action outcomes, these models have not been extended to capture the impacts of mood on confidence. Here we propose a computational model that formalises confidence and its relationship with learning from outcomes and emotional states. Methods: We collected data both in a laboratory context (n=35) and in a pre-registered online replication (n=106; https://osf.io/ygc4t). Participants completed a two-armed bandit task, with learning blocks before and after a mood manipulation in which participants unexpectedly received (positive mood induction) or lost (negative mood induction) a relatively large sum of money. Participants periodically reported their decision confidence throughout the task. We examined the extent to which the mood manipulation biased their confidence, predicting that positive and negative moods would lead to over- and under-confidence, respectively. We further predicted that this effect would be stronger in participants with greater propensity towards strong and changeable moods, measured by the Hypomanic Personality Scale. Moreover, we formalized a computational model in which confidence emerges as the difference between the perceived likelihood of reward for the available options. In this model, mood indirectly biases confidence through recursively biased learning of the reward likelihoods for the available options, and not from simply shifting overall confidence up or down.
Results: In both experiments, we confirmed that moods impacted confidence in the hypothesised direction; absent any differences in participants' objective performance, average confidence was higher following positive mood induction, and lower following negative mood induction. This effect was larger in participants with higher levels of trait hypomania. Intriguingly, we found that the effect of mood on confidence emerged in concert with learning. Indeed, whilst the shift in mood was greatest immediately post-mood manipulation and returned to baseline by the end of the learning block, the effect of mood on confidence gradually accumulated over learning trials, peaking at the end of the block. These dynamics were captured by simulations of a "Moody Likelihood" model. Empirically, this model simultaneously accounted for the effects of mood on choices, mood states and confidence through a mood bias parameter. Conclusion: We present a unified model in which moods recursively bias reward learning and, consequently, confidence in decision making. Moods fundamentally bias the accumulation of reward likelihood, rather than directly biasing decision confidence. Clinically, these findings have implications for understanding two core symptoms of mood disorder, suggesting that both perturbed mood and confidence about goal-directed behaviour arise from a common bias during reward learning.
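The core idea of the "Moody Likelihood" account, that mood biases confidence only indirectly, by distorting the outcomes that feed reward-likelihood learning, can be sketched as follows. This is a minimal toy with assumed parameter names, not the published model's exact form:

```python
def moody_likelihood_trial(p_est, choice, reward, mood_bias, lr=0.1):
    """One learning step in a 'Moody Likelihood'-style model (sketch only).

    p_est: dict mapping each of two options to its estimated reward
        likelihood.
    mood_bias: > 0 inflates the perceived outcome (positive mood),
        < 0 deflates it (negative mood); perception is clipped to [0, 1].
    Returns the updated estimates and a confidence value, defined here as
    the absolute difference between the two options' estimated reward
    likelihoods, so mood reaches confidence only through learning.
    """
    perceived = min(max(reward + mood_bias, 0.0), 1.0)  # mood-distorted outcome
    p_est[choice] += lr * (perceived - p_est[choice])   # delta-rule update
    a, b = p_est.values()
    return p_est, abs(a - b)
```

Because the bias enters through the learned likelihoods rather than an additive shift on confidence itself, its effect on confidence accumulates over trials, mirroring the gradual build-up the abstract reports.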
Sayali, C.; Heling, E.; Cools, R.
Cognitively demanding tasks are often perceived as costly due to the cognitive control resources they require, leading to effort avoidance, particularly in psychiatric populations with motivational impairments. Research on anxiety and cognitive effort is mixed: some studies suggest anxiety increases perceived effort cost and avoidance, while others indicate that cognitive effort engagement can serve as an adaptive coping strategy. To reconcile these perspectives, we examined the interaction between state and trait anxiety on cognitive effort evaluation and engagement in two experiments. We hypothesized that state anxiety enhances task engagement as difficulty increases, and that this effect is diminished in individuals with high trait anxiety. Experiment 1 assessed self-reported anxiety in an online sample, while Experiment 2 manipulated state anxiety through autobiographical recall. Both experiments employed flow induction and effort discounting paradigms. Across both studies, the effect of state anxiety on task engagement depended on trait anxiety, but the direction of the state anxiety effect was opposite to the one we predicted. In Experiment 1, participants with low trait anxiety reported reduced task engagement, as indexed by lower flow scores, when state anxiety was higher, but only in easy tasks. This effect was attenuated in participants with higher trait anxiety. The same pattern was observed in Experiment 2, but this time the interaction between trait and state anxiety was present regardless of task difficulty. These findings suggest that trait anxiety may blunt the impact of state anxiety on task disengagement. Public significance statement: This study demonstrated that effects of state anxiety on task disengagement depend on individual differences in trait anxiety. People with higher trait anxiety reported reduced effects of state anxiety on task disengagement.
Paul, A.; Segreti, M.; Marc, I. B.; Fiorenza, M. T.; Canterini, S.; Ramawat, S.; Bardella, G.; Pani, P.; Ferraina, S.; Brunamonti, E.
Understanding the ordinal relationships between items requires constructing a rank order supporting decision-making between options. This process depends on the ability to learn reciprocal relationships and to select the best option available when making a choice. In such forms of decision-making, the prefrontal cortex (PFC) plays a crucial role in encoding the relative value of alternatives as a decision is formed. Higher-order cognitive abilities are influenced by genetic factors that affect dopamine availability in the PFC, potentially contributing to individual differences. Here, we examined the performance of 83 participants in a transitive inference task (TI), grouped by genotype based on the Val158Met single-nucleotide polymorphism in the Catechol-O-Methyltransferase (COMT) gene. The task included a learning phase in which participants acquired the reciprocal relationships among a set of hierarchically ranked items (A>B>C>D>E>F), followed by a test phase in which they were required to compare all possible item pairs and select the higher-ranked one. While genotype did not significantly influence test-phase performance, it did affect learning efficiency. Specifically, Val homozygotes required a longer learning procedure than either heterozygotes or Met homozygotes. Drift diffusion modelling (DDM) revealed that task performance was explained by the efficiency of evidence accumulation, which was lower in Val homozygotes, accounting for their poorer performance not only during initial learning but also when required to switch to a reversed hierarchical structure (A<B<C<D<E<F). These findings suggest that individual differences in inferential decision-making and cognitive flexibility may be partially driven by genetically determined variations in prefrontal dopamine availability.
Kalhan, S.; Schwartenbeck, P.; Hester, R.; Garrido, M. I.
Adaptive behaviours depend on dynamically updating internal representations of the world based on the ever-changing environmental contingencies. People with a substance use disorder (pSUD) show maladaptive behaviours with high persistence in drug-taking, despite severe negative consequences. We recently proposed a salience misattribution model for addiction (SMMA; Kalhan et al., 2021), arguing that pSUD have aberrations in their updating processes where drug cues are misattributed as strong predictors of positive outcomes, but weaker predictors of negative outcomes. We also argue that, conversely, non-drug cues are misattributed as weak predictors of positive outcomes, but stronger predictors of negative outcomes. However, these hypotheses need to be empirically tested. Here we used a multi-cue reversal learning task, with reversals in whether drug or non-drug cues are currently relevant in predicting the outcome (monetary win or loss). We show that compared to controls, people with a tobacco use disorder (pTUD) do form misaligned internal representations. We found that pTUD updated less towards learning the drug cues' relevance in predicting a loss. Further, when neither drug nor non-drug cues predicted a win, pTUD updated more towards the drug cue being a relevant predictor of that win. Our Bayesian belief updating model revealed that pTUD had a low estimated likelihood of non-drug cues being predictors of wins, compared to drug cues, which drove the misaligned updating. Overall, several hypotheses of the SMMA were supported, but not all. Our results suggest that strengthening the non-drug cue association with positive outcomes may help restore the misaligned internal representation in pTUD.
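Bayesian belief updating over which cue is currently relevant, the kind of computation this abstract describes, can be sketched as a simple posterior update. This is an illustrative toy, not the authors' full model:

```python
def update_relevance(prior, likelihoods, outcome):
    """Bayesian update of the belief that each cue is the relevant
    predictor of the outcome (illustrative sketch only).

    prior: dict cue -> P(cue is the relevant predictor).
    likelihoods: dict cue -> P(win | that cue is relevant).
    outcome: True for a win, False for a loss.
    Returns the normalized posterior over cues.
    """
    post = {c: prior[c] * (likelihoods[c] if outcome else 1 - likelihoods[c])
            for c in prior}
    z = sum(post.values())                 # normalizing constant
    return {c: v / z for c, v in post.items()}
```

In this framing, the reported misattribution corresponds to distorted likelihood terms: if the estimated P(win | non-drug cue relevant) is too low relative to the drug cue, wins push the posterior toward the drug cue even when the non-drug cue is the true predictor.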
Niu, Y.; Hosseini, K.; Pena, A.; Rodriguez, C.; Buzzell, G. A.
The error-related negativity (ERN) and correct-related negativity (CRN) are event-related potentials (ERPs) that reflect performance monitoring following error and correct responses, respectively. Prior work demonstrates the ERN is sensitive to the motivational significance of errors, which increases under social observation. However, most studies testing how social observation impacts performance monitoring rely on trial-averaged ERPs, potentially obscuring meaningful fluctuations in ERN/CRN over time. Here, we had participants complete a Flanker task twice (social observation vs. alone) and employed mixed-effects modeling of single-trial ERPs to test if social observation impacts ERN/CRN trajectories over short (within blocks) or long (between blocks) timescales. We found that social observation selectively influenced ERN/CRN trajectories over short timescales: for blocks performed under social observation (but not alone), ERN magnitudes increased across trials and CRN magnitudes decreased. At longer timescales, ERN/CRN significantly decreased across all blocks, regardless of social observation and consistent with a vigilance decrement. To our knowledge, this is the first demonstration that social observation influences performance monitoring trajectories over short timescales. Results highlight the importance of analyzing ERN/CRN trajectories over relatively short timescales to fully characterize the impact of social observation on performance monitoring dynamics. These findings lay the groundwork for future investigation into whether social observation interacts with individual differences in motivation/affect to differentially impact performance monitoring dynamics.
Taghizadeh Sarabi, M.; Zimmermann, E.
The question we addressed in the current study is whether the mere prospect of monetary reward affects subjective time perception. To test this question, we collected trial-based confidence reports in a task in which subjects made categorical decisions about probe durations relative to the reference duration. When there was a potential to gain monetary reward, the duration was perceived to be longer than in the neutral condition, and confidence, which reflects the perceived probability of being correct, was higher in the reward condition than in the neutral condition. We found that confidence influences the sense of time differently across individuals: subjects with high confidence perceived the duration in the monetary gain condition as longer than did subjects with low confidence. Our results showed that only high-confidence individuals overestimated durations in the monetary gain context. Finally, we found a negative relationship between confidence and time perception, and that confidence bias at the maximum-uncertainty duration of 450 ms is predictive of time perception. Taken together, the current study suggests that the overestimation of time was driven by the subjective confidence profile rather than by the outcome valence of reward expectancy.
Spronkers, F. S.; Koolschijn, R. S.; Daw, N. D.; Otto, A. R.; den Ouden, H. E. M.
An extensive body of literature has shown that humans tend to avoid expending cognitive effort, just as they avoid expending physical effort or financial resources. How, then, do we decide whether to put this effort in? Decision-making not only involves choosing our actions, but also the meta-decision of how much cognitive effort to invest in making this choice, weighing the costs of cognitive effort against potential rewards. Popular recent theories, grounded in the field of reinforcement learning, suggest that this cost-benefit trade-off can be informed by the opportunity costs of effort investment, which the brain may approximate by the estimated average reward rate per unit time. Intuitively, in a low-reward environment, investing cognitive resources in the task at hand is less likely to lead to missed opportunities. Recent studies provided support for this idea, showing that people exert more cognitive effort when the reward rate is low. Here, we replicate one of the key previous findings but provide an important nuance to this result. Cognitive effort allocation was better explained by participants' recent performance history (i.e., accuracy rate) than by average reward rate. In combination with the observation that participants were insensitive to the reward currently at stake, this invites a reinterpretation of these previous findings and suggests the need for further studies to assess whether environmental richness may indeed serve as a heuristic to modulate cognitive effort allocation.
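Both candidate signals contrasted in this abstract, an average reward rate and a recent accuracy rate, are commonly approximated with an exponentially weighted running average over trial outcomes. The sketch below is a generic estimator under that assumption, not the paper's exact one:

```python
def running_average(history, alpha):
    """Exponentially weighted running average over a trial history
    (generic sketch; the paper's exact estimator may differ).

    history: sequence of per-trial values, e.g. rewards earned per trial
        (to track a reward rate) or 0/1 correctness (to track accuracy).
    alpha: learning rate in (0, 1]; higher values weight recent trials
        more heavily.
    """
    est = 0.0
    for x in history:
        est += alpha * (x - est)   # delta-rule update toward each outcome
    return est
```

Feeding rewards versus correctness into the same estimator yields the two competing predictors; the reported result is that the accuracy-based trace explained effort allocation better than the reward-rate trace.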
Tahamtan, Z.; Osia, S. A.; Herman, P. A.
Task-switching paradigms, often used to study cognitive flexibility, frequently employ incongruent bivalent stimuli, triggering two tasks and potentially conflating cognitive flexibility with interference control. This study assesses cognitive flexibility using univalent stimuli (triggering one task) and congruent bivalent stimuli (same response across tasks) in a modified Stroop task to investigate well-established neural activity correlates of cognitive flexibility, manifested in human (females and males) electroencephalography (EEG) recordings, while isolating the switch process from the influence of interference control at the response level. In particular, we analyzed EEG theta-band activity and event-related potential (ERP) components in switch (N2, P3a, P3b, late sustained potential (LSP)) and interference (N400, LSP and mid-frontal theta activity) conditions. We compared each of the switch and interference conditions to the control condition using a cluster-based permutation test. In the switch condition, we observed a fronto-central N2, reduced frontal P3a, and a positive occipital LSP. The interference condition showed increased frontal theta, a parietal N400, and a positive occipital LSP. We also compared the switch and interference conditions directly using a cluster-based permutation test. We observed a larger N2 in frontocentral regions during the switch condition and higher frontal theta activity during the interference condition, which aligns with their comparisons to the control condition. This result suggests that distinct neural mechanisms are used for each of the processes involved in conflict monitoring. Specifically, the theta activity may reflect sustained monitoring and conflict resolution during interference, while the N2 may reflect more transient conflict detection and the need to switch task sets. Significance Statement: Cognitive flexibility, the ability to adapt behavior to a changing environment, is essential for goal-directed actions. Unlike the typical approach to assessing cognitive flexibility, we did not use incongruent bivalent stimuli; we used univalent and congruent bivalent stimuli to isolate cognitive flexibility, while assessing interference control separately with incongruent bivalent stimuli. We analyzed well-documented brain activities related to cognitive flexibility (P3b, N2, P3a, LSP) and interference control (N400, LSP), and mid-frontal EEG theta activity. Despite using different stimuli, we observed all expected components associated with the switch process except for the P3b. Both processes share common parietal activity, and while the frontal lobe plays a role in both, its activity differs between them.
Brands, A. M.; Knauth, K.; Mathar, D.; Lee, S.; Kuzmanovic, B.; Tittgemeyer, M.; Peters, J.
Disordered gambling has been linked to impairments in goal-directed (model-based) control and reinforcement learning. Here we investigated the potential neural basis of this impairment using a sequential reinforcement learning task (modified two-step-task), computational modeling, and functional magnetic resonance imaging (fMRI) in individuals exhibiting symptoms of disordered gambling (GD) and matched healthy controls (HC, n=30 per group). Model-agnostic analyses replicated the effects of reduced performance and reduced model-based control in the gambling group, both in terms of choice and response time effects. Computational modeling of choice behavior confirmed that this effect was due to reduced model-based control in the gambling group. Analyses of choices and response times using drift diffusion modeling revealed a more complex pattern, where behavioral impairments in the gambling group were linked to changes across several parameters reflecting drift rate modulation and asymptote, as well as non-decision time. Despite these pronounced behavioral differences, the gambling group exhibited largely intact neural effects related to the task transition structure, reward feedback and trial-to-trial behavioral adjustments. Results are discussed with respect to current neurocomputational models of behavioral dysregulation in disordered gambling.
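The model-based versus model-free control probed by the two-step task is often formalized as a weighted mixture of the two value estimates passed through a softmax at the first stage. The sketch below uses that standard formulation, which may differ from this paper's exact parameterization:

```python
import math

def choice_probability(q_mb, q_mf, w, beta):
    """Probability of choosing option 0 in a hybrid two-step-task model
    (standard textbook formulation; a sketch, not this paper's model).

    q_mb, q_mf: two-element sequences of model-based and model-free
        values for the two first-stage options.
    w: mixture weight; 0 = purely model-free, 1 = purely model-based,
        so reduced model-based control corresponds to a lower w.
    beta: inverse temperature of the softmax over the mixed values.
    """
    q = [w * mb + (1 - w) * mf for mb, mf in zip(q_mb, q_mf)]
    return 1.0 / (1.0 + math.exp(-beta * (q[0] - q[1])))
```

In this framing, the gambling group's reduced model-based control maps onto a lower estimated w, while the drift diffusion analyses reported in the abstract additionally locate differences in how those mixed values drive evidence accumulation and non-decision time.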
Xin, F.; Lai, J.; Guo, M.; Chen, Q.; Wu, J.
Choice consistency is a fundamental aspect of rational decision-making, reflecting the stability and reliability of an individual's preferences. However, real-world decision-making often deviates from this ideal, as individuals frequently make irrational or inconsistent choices in value-based decision-making. This study combined computational modeling, neuroimaging, and behavioral assessments to elucidate the mechanisms by which stress and memory affect choice consistency. Remembered items exhibited higher choice consistency compared to forgotten items. Computational modeling further indicated that the drift rate was higher, and the decision threshold lower, for remembered food items compared to forgotten ones. Stress was found to impair both choice consistency and memory retrieval, with stress-induced declines in memory accuracy positively correlating with reductions in choice reaction times. Activation of the dorsolateral prefrontal cortex (DLPFC) during the pre-choice anticipation period was positively associated with choice consistency. Similarly, activation of the orbitofrontal cortex (OFC) during the memory retrieval of food stimuli correlated with improved memory accuracy. These findings suggest that stress may impair choice consistency by disrupting memory retrieval processes. Overall, our study provides novel insights into the role of stress and memory in decision-making, offering a more nuanced understanding of the neural and cognitive processes that govern choice behavior.
Fernandes, P.; Lee, S.; Kable, J. W.; Seidel, P.; Almeida, J.; Eriksson, J.; de Sousa, B.; Bergstrom, F.
The relationship between conscious awareness and decisions has been heavily debated. Here we investigated whether subliminal probabilities are integrated with conscious rewards to form subjective value (SV) representations in the anterior ventral striatum (aVS) and ventromedial prefrontal cortex (vmPFC). Participants played an incentivized competitive game with risky choice to accumulate points across trials in a behavioral and an fMRI experiment. The game was a modified attentional-blink paradigm that rendered a probability cue unseen (indicating a 100% or 0% chance to win a risky reward). Following the probability cue, participants chose between a safe option (1 point with certainty) or a risky option (>1 or 0 points depending on the probability cue). The risky reward was either 2 or 5 points, varying across trials. In some trials the probability cue was absent (replaced by a random distractor) and the probability to win the risky reward was 50%. When probability cues were unseen, they did not influence choice, as value-maximizing choice (d′) was not greater than chance, but they did influence reaction time in both experiments. Consistent with SV integration, the BOLD signal in aVS and vmPFC was higher for both conscious rewards (high > low) and subliminal probabilities (high > low) and could not be explained by subliminal salience (cue present > absent). Moreover, multivariate pattern similarity between conscious rewards and subliminal probabilities in vmPFC suggests integration into an abstract value representation. Additionally, we found brain-wide subliminal probability and salience effects. Taken together, these results suggest that conscious awareness is not necessary for probability to be integrated with conscious rewards to form an abstract "common currency" SV representation in vmPFC. Additionally, the brain-wide subliminal probability and salience effects suggest that information can have "global access" without conscious awareness.
Kobor, A.; Kardos, Z.; Takacs, A.; Elteto, N.; Janacsek, K.; Toth-Faber, E.; Csepe, V.; Nemeth, D.
Both primarily and recently encountered information have been shown to influence experience-based risky decision making. The primacy effect predicts that initial experience will influence later choices even if outcome probabilities change and reward is ultimately more or less sparse than primarily experienced. However, it has not been investigated whether extended initial experience would induce a more profound primacy effect upon risky choices than brief experience. Therefore, the present study tested in two experiments whether young adults adjusted their risk-taking behavior in the Balloon Analogue Risk task after an unsignaled and unexpected change point. The change point separated early "good luck" or "bad luck" trials from subsequent ones. While mostly positive (more reward) or mostly negative (no reward) events characterized the early trials, subsequent trials were unbiased. In Experiment 1, the change point occurred after one-sixth or one-third of the trials (brief vs. extended experience) without intermittence, whereas in Experiment 2, it occurred between separate task phases. In Experiment 1, if negative events characterized the early trials, after the change point, risk-taking behavior increased as compared with the early trials. Conversely, if positive events characterized the early trials, risk-taking behavior decreased after the change point. Although the adjustment of risk-taking behavior occurred due to integrating recent experiences, the impact of initial experience was simultaneously observed. The length of initial experience did not reliably influence the adjustment of behavior. In Experiment 2, participants became more prone to take risks as the task progressed, indicating that the impact of initial experience could be overcome. Altogether, we suggest that initial beliefs about outcome probabilities can be updated by recent experiences to adapt to the continuously changing decision environment.
Brands, A. M.; Knauth, K.; Mathar, D.; Roedder, T.; Lisner, K.; Peters, J.
The catecholamine precursor tyrosine has been linked to improved cognitive performance, but investigations into the decision-making and reinforcement learning processes known to be under catecholamine control are sparse. We examined the impact of a single dose of tyrosine (2 g) on reinforcement learning and exploration in a large (n = 63), gender-balanced sample in a preregistered within-subjects study. Reinforcement learning performance improved under tyrosine, and computational modeling revealed that this improvement was driven by a stabilization of choice behavior, reflected in increased value-driven exploitation. Further non-preregistered modeling analyses confirmed that accounting for higher-order perseveration substantially improved model fit and substantiated the observation of increased value-driven exploitation under tyrosine. These analyses also revealed a finer-grained computational impact of tyrosine, showing attenuated directed exploration and value-independent perseveration. Tyrosine supplementation therefore improved reinforcement learning performance by stabilizing choice patterns in the service of optimizing reward accumulation. These results confirm that tyrosine supplementation modulates specific computational mechanisms thought to be under catecholamine control.
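The model components named in this abstract — value-driven exploitation and value-independent perseveration — map onto familiar softmax-bandit machinery: an inverse temperature scales how strongly learned values drive choice, and a stickiness bonus favors repeating the previous response. The sketch below is a generic illustration under those assumptions, not the authors' preregistered model; the bandit, parameter values, and function names are invented for the example:

```python
import math
import random

def softmax_choice(q, beta, persev, last, rng):
    """Sample an arm. 'beta' scales value-driven exploitation; 'persev'
    adds a value-independent stickiness bonus to the last chosen arm."""
    logits = [beta * q[i] + (persev if i == last else 0.0)
              for i in range(len(q))]
    m = max(logits)
    exp_l = [math.exp(l - m) for l in logits]  # shift for numerical stability
    z = sum(exp_l)
    r, acc = rng.random(), 0.0
    for i, e in enumerate(exp_l):
        acc += e / z
        if r < acc:
            return i
    return len(q) - 1

def simulate(n_trials=200, alpha=0.3, beta=5.0, persev=0.5,
             p_reward=(0.8, 0.2), seed=1):
    """Two-armed bandit with delta-rule learning; returns the fraction
    of choices allocated to the richer arm (arm 0)."""
    rng = random.Random(seed)
    q, last, best = [0.0, 0.0], -1, 0
    for _ in range(n_trials):
        choice = softmax_choice(q, beta, persev, last, rng)
        reward = 1.0 if rng.random() < p_reward[choice] else 0.0
        q[choice] += alpha * (reward - q[choice])  # delta-rule update
        last = choice
        best += (choice == 0)
    return best / n_trials
```

In this framing, a pharmacological increase in value-driven exploitation corresponds to a higher effective beta, which stabilizes choice on the richer option, while changes in persev capture value-independent stickiness independently of learned values.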
Hoskin, R.; Pernet, C.; Talmi, D.
To understand how the brain computes the expected utility of mixed prospects, namely those associated with both negative and positive attributes, we designed a task that equated the opportunity to learn about these attributes and their hedonic value. Participants underwent fMRI scanning while they experienced a classical conditioning paradigm in which emotionally neutral faces predicted a probability of pain and reward conforming to a 2 (electric pain: high, low) × 2 (monetary reward: high, low) factorial design. We found a robust interaction between the anticipation of pain and reward in the BOLD signal. Analysis of simple effects revealed that sensitivity to each attribute increased under high levels of the other: in the bilateral insula and mid-cingulate gyrus, sensitivity to pain was greater under high reward, and in the OFC, caudate, ventral striatum, and VTA, sensitivity to reward was greater under high pain. We speculate that this pattern reflects dynamic shifts in the reference point participants used to evaluate each attribute.
Weis, J. F.; Krach, S.; Paulus, F. M.; Stolz, D. S.
Successes and failures shape affective experience and self-esteem. Self-compassion has been proposed as a protective factor, allowing individuals to acknowledge both strengths and shortcomings without excessive self-criticism. However, the mechanisms through which self-compassion influences changes in affect and self-esteem remain poorly understood. Here, we experimentally tested whether self-compassion modulates the links between performance feedback, affective experience, and self-esteem. Participants completed an effortful performance task and received trial-by-trial feedback while repeatedly rating their positive affect. Results show that self-compassion buffered against declines in self-esteem among poorly performing individuals, predicted higher overall positive affect throughout the task, and was associated with increased post-task self-esteem. Moreover, performance feedback predicted positive affect, which in turn predicted post-task self-esteem, although these pathways were not moderated by self-compassion. Together, these findings add to growing evidence on how self-compassion shapes positive affect and self-esteem, and they may inform treatment strategies for clinical populations characterized by low self-esteem or heightened self-criticism.